Google's still not giving us the full picture on AI energy use

MIT Technology Review

"We're not comfortable revealing that for various reasons," Dean told me on our call. The total number is an abstract measure that changes over time, he says, adding that the company wants users to think about the energy use per prompt. But there are people all over the world interacting with this technology, not just me, and what we all add up to seems quite relevant. OpenAI, by contrast, does share a total: it recently reported that ChatGPT receives 2.5 billion queries every day. So for the curious, we can combine that figure with the company's self-reported average energy use per query (0.34 watt-hours) to get a rough idea of the total for everyone prompting ChatGPT.
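Using only the two self-reported numbers above, the back-of-the-envelope daily total works out as follows (a rough estimate, not an official figure from either company):

```python
# Back-of-the-envelope estimate from OpenAI's self-reported figures only.
queries_per_day = 2.5e9        # ChatGPT queries per day (OpenAI)
wh_per_query = 0.34            # average watt-hours per query (OpenAI)

total_wh_per_day = queries_per_day * wh_per_query
total_mwh_per_day = total_wh_per_day / 1e6   # 1 MWh = 1,000,000 Wh

print(f"{total_mwh_per_day:,.0f} MWh per day")  # 850 MWh per day
```

That is roughly 850 megawatt-hours every day for ChatGPT prompts alone, which is the kind of aggregate figure Google declines to provide.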


How to Cut Through ChatGPT Hype and Still Appreciate Generative AI's Potential

#artificialintelligence

You'd have to have been living under the proverbial rock for the last 60 days not to have heard about ChatGPT, a variant of the GPT (generative pre-trained transformer) language model designed to generate human-like text in a conversational context. The technology has received massive attention, with predictions that it will do everything from "reshape humanity" to the old chestnuts of taking over human jobs and/or hijacking democracy. But does ChatGPT deserve all this praise? First, ChatGPT is only as good as what it's been trained on, so it may not be able to generate responses to prompts or situations that are not reflected in its training data. Its underlying models haven't been trained on any data from after 2021, so if you ask it to write about current news events, it comes up empty.


Robots dress humans without the full picture

Robohub

The robot seen here can't see the human arm during the entire dressing process, yet it manages to successfully get a jacket sleeve pulled onto the arm. Robots are already adept at certain things, such as lifting objects that are too heavy or cumbersome for people to manage. Another application they're well suited for is the precision assembly of items like watches that have large numbers of tiny parts -- some so small they can barely be seen with the naked eye. "Much harder are tasks that require situational awareness, involving almost instantaneous adaptations to changing circumstances in the environment," explains Theodoros Stouraitis, a visiting scientist in the Interactive Robotics Group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "Things become even more complicated when a robot has to interact with a human and work together to safely and successfully complete a task," adds Shen Li, a PhD candidate in the MIT Department of Aeronautics and Astronautics. Li and Stouraitis -- along with Michael Gienger of the Honda Research Institute Europe, Professor Sethu Vijayakumar of the University of Edinburgh, and Professor Julie A. Shah of MIT, who directs the Interactive Robotics Group -- have selected a problem that offers, quite literally, an armful of challenges: designing a robot that can help people get dressed.


Robots Dress Humans Without The Full Picture - AI Summary

#artificialintelligence

"Much harder are tasks that require situational awareness, involving almost instantaneous adaptations to changing circumstances in the environment," explains Theodoros Stouraitis, a visiting scientist in the Interactive Robotics Group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "Things become even more complicated when a robot has to interact with a human and work together to safely and successfully complete a task," adds Shen Li, a PhD candidate in the MIT Department of Aeronautics and Astronautics. In new work, described in a paper that appears in an April 2022 issue of IEEE Robotics and Automation Letters, Li, Stouraitis, Gienger, Vijayakumar, and Shah explain the headway they've made on a more demanding problem: robot-assisted dressing with sleeved clothes. While other researchers have made state-estimation predictions of this sort, what distinguishes the new work is that the MIT investigators and their partners can set a clear upper limit on the uncertainty and guarantee that the elbow will be somewhere within a prescribed box. Such an algorithm could, for instance, guide a robot to recognize the intentions of its human partner as it works collaboratively to move blocks around in an orderly manner or set a dinner table.
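The guarantee described here (the elbow is certain to lie inside a prescribed box) is characteristic of set-membership state estimation. The following is a loose illustration of that general idea only, not the paper's algorithm; the bounded-noise model, the function name, and the numbers are all my own assumptions:

```python
def guaranteed_box(measurements, max_error):
    """Sketch of set-membership estimation (NOT the paper's method).

    Assumes each 3-D measurement is within max_error of the true
    position on every axis. Each axis then lies in [m - e, m + e] for
    every measurement m, so intersecting those intervals gives an
    axis-aligned box guaranteed to contain the true position
    (provided the error bound actually holds).
    """
    dims = len(measurements[0])
    lo = [max(m[d] - max_error for m in measurements) for d in range(dims)]
    hi = [min(m[d] + max_error for m in measurements) for d in range(dims)]
    return lo, hi

# Three hypothetical noisy elbow-position readings (metres), bound 0.05 m:
readings = [(0.42, 0.10, 0.98), (0.45, 0.08, 1.01), (0.40, 0.11, 0.97)]
lo, hi = guaranteed_box(readings, 0.05)
print(lo, hi)   # a box roughly 5-7 cm wide on each axis
```

More measurements can only shrink the box, never grow it, which is why this style of estimation pairs naturally with safety guarantees.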


Robots dress humans without the full picture

#artificialintelligence

Robots are already adept at certain things, such as lifting objects that are too heavy or cumbersome for people to manage. Another application they're well suited for is the precision assembly of items like watches that have large numbers of tiny parts--some so small they can barely be seen with the naked eye. "Much harder are tasks that require situational awareness, involving almost instantaneous adaptations to changing circumstances in the environment," explains Theodoros Stouraitis, a visiting scientist in the Interactive Robotics Group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "Things become even more complicated when a robot has to interact with a human and work together to safely and successfully complete a task," adds Shen Li, a Ph.D. candidate in the MIT Department of Aeronautics and Astronautics. Li and Stouraitis--along with Michael Gienger of the Honda Research Institute Europe, Professor Sethu Vijayakumar of the University of Edinburgh, and Professor Julie A. Shah of MIT, who directs the Interactive Robotics Group--have selected a problem that offers, quite literally, an armful of challenges: designing a robot that can help people get dressed. Last year, Li and Shah and two other MIT researchers completed a project involving robot-assisted dressing without sleeves.


The data that will change the world is scattered all around us

#artificialintelligence

This article was contributed by Roman Sandler, CTO and cofounder at Ravin AI. It's no secret that AI is changing industries and businesses of all types. Medicine, education, retail, manufacturing, automotive, and many other sectors are being impacted by advances in machine intelligence, whether in the form of machine learning, neural networks, natural language processing, or AI more broadly. AI-powered technologies have already been responsible for significant efficiencies and improvements in a wide variety of areas, but this is just the beginning; the AI-wrought changes we've seen so far utilize, by many estimates, only a small fraction of all available data. It's safe to say that when we use more data, much of it unstructured, things will really get interesting.


Council Post: The Customer Experience Is Finally Seeing Some Magical Moments Powered By AI At Scale

#artificialintelligence

Abhi is the co-founder and chief of product and technology at Zylotech, with deep experience across AI and customer-data-driven revenue operations. Consumers have high expectations when engaging with brands, whether those brands are B2C or B2B. They expect the companies they engage with to be omnichannel, meeting them on multiple devices with consistent, high-quality experiences across every channel. They expect immediacy for nearly everything, from answers to questions with relevant content to personalized recommendations and contextualized offers. And they expect interactions with brands to be personalized based on their needs at the moment, not just on their broader needs.


Is accuracy EVERYTHING?

#artificialintelligence

If you have been in machine learning for some time, you have probably been developing models to attain high accuracy, since accuracy is the prime metric for comparing models. But model evaluation does not consider accuracy alone. When we evaluate a model, we do consider accuracy, but what we focus on most is how robust the model is, how it will perform on a different dataset, and how much flexibility it offers. Accuracy is no doubt an important metric, but it does not always give the full picture. When we say a model is robust, we mean that it has learned the data in a correct and desirable manner, so the predictions it makes are close to the actual values. Because of the mathematical techniques involved and the uncertain nature of data, a model may achieve better accuracy yet fail to capture the data properly, and hence perform poorly when the data varies.
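A toy illustration of why accuracy alone misleads (the dataset and numbers are made up for the example): on an imbalanced dataset, a "model" that always predicts the majority class scores high accuracy while having learned nothing about the data.

```python
# 100 examples: 95 negatives, 5 positives (think fraud detection).
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100          # always predict the majority class

# Accuracy: fraction of predictions that match the labels.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Recall: fraction of real positives the model actually finds.
true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = true_pos / sum(y_true)

print(accuracy)  # 0.95 -- looks great on paper
print(recall)    # 0.0  -- yet it never detects a single positive
```

This is why evaluation typically pairs accuracy with metrics like recall, precision, or performance on held-out and shifted datasets.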


Using AI to give people who are blind the "full picture"

#artificialintelligence

Everything that makes up the web--text, images, video and audio--can be easily discovered. Many people who are blind or have low vision rely on screen readers to make the content of web pages accessible through spoken feedback or braille. For images and graphics, screen readers rely on descriptions created by developers and web authors, which are usually referred to as "alt text" or "alt attributes" in the code. However, there are millions of online images without any description, leading screen readers to say "image," "unlabeled graphic," or a lengthy, unhelpful reading of the image's file name. When a page contains images without descriptions, people who are blind may not get all of the information conveyed, or even worse, it may make the site totally unusable for them.
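The mechanics of the problem are easy to see in code. Below is a minimal sketch, using Python's standard-library HTML parser, of how a tool might flag images with a missing or empty `alt` attribute; the class name and sample page are illustrative, and this is not how any particular screen reader works internally:

```python
from html.parser import HTMLParser

class MissingAltFinder(HTMLParser):
    """Collects <img> tags whose alt attribute is absent or empty."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):        # None (absent) or ""
                self.missing.append(attr_map.get("src", "<no src>"))

page = """
<img src="cat.jpg" alt="A cat sleeping on a windowsill">
<img src="IMG_4203.jpg">
<img src="chart.png" alt="">
"""
finder = MissingAltFinder()
finder.feed(page)
print(finder.missing)   # ['IMG_4203.jpg', 'chart.png']
```

For the second image, a screen reader has nothing better to announce than the file name "IMG_4203.jpg", which is exactly the unhelpful experience described above.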